Time-series data on urban land cover is of great utility for analyzing urban growth patterns, changes in the distribution of impervious surfaces and vegetation, and their impact on the urban microclimate. Although Landsat data, with its long time series of freely available imagery, is well suited for such analysis, conventional per-pixel hard classification fails to realize the full potential of Landsat data. This paper proposes a sub-pixel classification method that exploits the temporal overlap of the Landsat-5 TM and Resourcesat-1 LISS-IV sensors. We train a convolutional neural network to predict fractional land cover from 30 m Landsat-5 TM data. Reference land cover fractions are estimated from hard-classified 5.8 m LISS-IV imagery of Bengaluru from 2011. We further demonstrate the generalizability and superior performance of the proposed model using data for Mumbai from 2009 and compare it against a random forest classifier. For both the Bengaluru (2011) and Mumbai (2009) data, the mean absolute percentage error of the built-up and vegetation fraction predictions of our CNN model at the 30 m cell level lies between 7.2 and 11.3. Unlike recent studies that validate on data of limited spatial extent, our model has been trained and validated on data covering the full spatial extents of two mega cities for two different time periods. It can therefore reliably generate 30 m built-up and vegetation fraction maps from Landsat-5 TM time-series data for the analysis of long-term urban growth patterns.
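A minimal sketch of the kind of model involved, with the band count and layer sizes assumed: a small CNN that maps a multi-band 30 m Landsat-5 TM patch to per-cell class fractions (built-up, vegetation, other), together with the mean absolute percentage error (MAPE) used to report fraction accuracy. This is illustrative only, not the paper's architecture.

```python
import torch
import torch.nn as nn

class FractionCNN(nn.Module):
    """Predicts per-cell land-cover fractions (built-up, vegetation, other)
    from a multi-band 30 m Landsat-5 TM patch. Layer sizes are illustrative."""
    def __init__(self, in_bands=6, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_classes, 1),          # per-cell class scores
        )

    def forward(self, x):                         # x: (B, bands, H, W)
        logits = self.features(x)
        return torch.softmax(logits, dim=1)       # fractions sum to 1 per cell

def mape(pred, target, eps=1e-6):
    """Mean absolute percentage error on predicted fractions (toy version)."""
    return 100.0 * torch.mean(torch.abs(pred - target) / (target + eps))
```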
Quadruped robots are currently used in industrial robotics as mechanical aids to automate several routine tasks. However, the use of such robots in domestic settings is still largely a research topic. This paper discusses the design and virtual simulation of such a robot, capable of detecting and understanding human emotions, generating its own gait, and responding via sounds and on-screen expressions. To this end, we use a combination of reinforcement learning and software engineering concepts to simulate a quadruped robot that can understand emotions, navigate various terrains, detect sound sources, and respond to emotions using audio-visual feedback. This paper aims to establish a framework for simulating a quadruped robot that is emotionally intelligent and can respond to audio-visual stimuli primarily through motor or audio responses. Emotion detection from speech was not as performant as ERANNs or Zeta Policy learning, but still achieved an accuracy of 63.5%. The video emotion detection system produced results almost on par with the state of the art, with an accuracy of 99.66%. Owing to its "on-policy" learning process, the PPO algorithm learned extremely rapidly, allowing the simulated dog to demonstrate a remarkably seamless gait across different cadences and variations. This enabled the quadruped robot to respond to the generated stimuli, allowing us to conclude that it functions as intended and satisfies the aim of this work.
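The gait-learning component can be reproduced in outline with an off-the-shelf PPO implementation. Below is a minimal sketch using stable-baselines3, with "Ant-v4" standing in for the simulated quadruped; the authors' actual simulator, reward shaping, and hyperparameters are not shown here.

```python
# Minimal PPO gait-training sketch (not the authors' code).
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Ant-v4")                  # stand-in quadruped locomotion task
model = PPO("MlpPolicy", env, verbose=1)  # on-policy: learns from fresh rollouts
model.learn(total_timesteps=1_000_000)    # train the gait policy
model.save("quadruped_gait_ppo")

# Roll out the learned gait
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```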
The rapid growth of machine translation (MT) systems has necessitated comprehensive studies to meta-evaluate evaluation metrics being used, which enables a better selection of metrics that best reflect MT quality. Unfortunately, most of the research focuses on high-resource languages, mainly English, the observations for which may not always apply to other languages. Indian languages, having over a billion speakers, are linguistically different from English, and to date, there has not been a systematic study of evaluating MT systems from English into Indian languages. In this paper, we fill this gap by creating an MQM dataset consisting of 7000 fine-grained annotations, spanning 5 Indian languages and 7 MT systems, and use it to establish correlations between annotator scores and scores obtained using existing automatic metrics. Our results show that pre-trained metrics, such as COMET, have the highest correlations with annotator scores. Additionally, we find that the metrics do not adequately capture fluency-based errors in Indian languages, and there is a need to develop metrics focused on Indian languages. We hope that our dataset and analysis will help promote further research in this area.
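The core meta-evaluation step, correlating automatic metric scores with MQM annotator scores at the segment level, can be sketched as follows. The file name, column names, and metric list are assumptions for illustration, not the paper's release format.

```python
# Hedged sketch: correlating automatic metric scores with MQM annotator scores.
import pandas as pd
from scipy.stats import kendalltau, pearsonr

df = pd.read_csv("mqm_annotations.csv")   # hypothetical file: one row per segment

for metric in ["COMET", "BLEU", "chrF"]:
    tau, _ = kendalltau(df[metric], df["mqm_score"])
    r, _ = pearsonr(df[metric], df["mqm_score"])
    print(f"{metric}: Kendall tau={tau:.3f}, Pearson r={r:.3f}")
```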
We present Naamapadam, the largest publicly available Named Entity Recognition (NER) dataset for the 11 major Indian languages from two language families. For 9 out of the 11 languages, it contains more than 400k sentences annotated with a total of at least 100k entities from three standard entity categories (Person, Location, and Organization). The training dataset has been automatically created from the Samanantar parallel corpus by projecting automatically tagged entities from English sentences onto the corresponding Indian-language sentences. We also create manually annotated test sets for 8 languages, containing approximately 1000 sentences per language. We demonstrate the utility of the obtained dataset on existing test sets and on the Naamapadam test data for 8 Indic languages. We also release IndicNER, a multilingual mBERT model fine-tuned on the Naamapadam training set. IndicNER achieves the best F1 on the Naamapadam test set compared to an mBERT model fine-tuned on existing datasets, with an F1 score of more than 80 for 7 out of the 11 Indic languages. The dataset and models are available under open-source licenses at https://ai4bharat.iitm.ac.in/naamapadam.
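A highly simplified sketch of the annotation-projection idea (not the actual Naamapadam pipeline): given BIO tags on the English side and word alignments into the Indian-language sentence, entity labels are copied onto the aligned target tokens. The toy sentence and one-to-one alignment below are assumptions for illustration.

```python
# Simplified annotation-projection sketch.
def project_tags(en_tags, alignments, n_tgt_tokens):
    """alignments: list of (en_idx, tgt_idx) pairs from a word aligner."""
    tgt_tags = ["O"] * n_tgt_tokens
    for en_idx, tgt_idx in alignments:
        if en_tags[en_idx] != "O":
            tgt_tags[tgt_idx] = en_tags[en_idx]   # copy entity label across the alignment
    return tgt_tags

en_tags = ["B-PER", "I-PER", "O", "B-LOC", "O"]        # e.g. "Virat Kohli visited Delhi ."
alignments = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]  # toy one-to-one alignment
print(project_tags(en_tags, alignments, 5))
```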
In this work, we introduce IndicXTREME, a benchmark consisting of nine diverse tasks covering 18 languages from the Indic sub-continent belonging to four different families. Across languages and tasks, IndicXTREME contains a total of 103 evaluation sets, of which 51 are new contributions to the literature. To maintain high quality, we only use human annotators to curate or translate our datasets (for IndicXParaphrase, where an automatic translation system is used, a second human verification and correction step is performed). To the best of our knowledge, this is the first effort toward creating a standard benchmark for Indic languages that aims to test the zero-shot capabilities of pretrained language models. We also release IndicCorp v2, an updated and much larger version of IndicCorp that contains 20.9 billion tokens in 24 languages. We pretrain IndicBERT v2 on IndicCorp v2 and evaluate it on IndicXTREME to show that it outperforms existing multilingual language models such as XLM-R and MuRIL.
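For context, the zero-shot protocol such a benchmark targets looks roughly like the sketch below: a multilingual encoder (here xlm-roberta-base, one of the compared baselines) is fine-tuned on English data only and then evaluated unchanged on Indic-language inputs. The toy examples and single optimization step are placeholders; a real run uses the full English training set and the benchmark's test sets.

```python
# Hedged sketch of zero-shot cross-lingual evaluation.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "xlm-roberta-base"              # one of the baselines compared in the paper
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)

# English-only "training" step (one toy batch stands in for the full English set)
en_batch = tok(["a great film", "a terrible film"], return_tensors="pt", padding=True)
loss = model(**en_batch, labels=torch.tensor([1, 0])).loss
loss.backward()
opt.step()

# Zero-shot evaluation on a Hindi example: no Hindi labels were ever seen
hi_batch = tok(["एक शानदार फिल्म"], return_tensors="pt", padding=True)
with torch.no_grad():
    print(model(**hi_batch).logits.argmax(-1))
```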
In recent years, the importance of smart healthcare cannot be overstated. The current work proposes to advance the state of the art of smart healthcare by integrating solutions for Obsessive-Compulsive Disorder (OCD). Identifying OCD from oxidative stress biomarkers (OSBs) using machine learning is an important development in the study of OCD. However, this process involves collecting OCD class labels from hospitals, collecting the corresponding OSBs from biochemical laboratories, creating an integrated and labeled dataset, using a suitable machine learning algorithm to design an OCD prediction model, and making these prediction models available to different biochemical laboratories for OCD prediction on unlabeled OSBs. Further, as the volume of labeled samples grows over time, the prediction model must be redesigned for further use. The whole process requires distributed data collection, data integration, coordination between hospitals and biochemical laboratories, dynamic design of the machine learning OCD prediction model using a suitable algorithm, and making the model available to the biochemical laboratories. Keeping all of this in mind, we propose Accu-Help, a fully automated, smart, and accurate conceptual model for OCD detection, to help biochemical laboratories detect OCD efficiently from OSBs. OSBs are classified into three classes: Healthy Individual (HI), OCD Affected Individual (OAI), and Genetically Affected Individual (GAI). The main component of the proposed framework is the design of the machine learning OCD prediction model. In Accu-Help, a neural-network-based approach is presented with an OCD prediction accuracy of 86 percent.
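A minimal sketch of the kind of classifier described, assuming a small tabular OSB feature vector per individual. The feature count, network size, and synthetic data below are illustrative only and will not reproduce the reported 86 percent accuracy.

```python
# Hedged sketch: a small MLP mapping OSB vectors to the classes HI / OAI / GAI.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))            # 8 hypothetical OSB measurements per sample
y = rng.integers(0, 3, size=300)         # 0=HI, 1=OAI, 2=GAI (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```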
Deep learning based text-to-speech (TTS) systems have been evolving rapidly with advances in model architectures, training methodologies, and generalization across speakers and languages. However, these advances have not been thoroughly investigated for Indian language speech synthesis. Such an investigation is computationally expensive given the number and diversity of Indian languages, the relatively lower resource availability, and the diverse set of advances in neural TTS that remain untested. In this paper, we evaluate the choice of acoustic models, vocoders, supplementary loss functions, training schedules, and speaker and language diversity for Dravidian and Indo-Aryan languages. Based on this, we find that monolingual models with FastPitch and HiFi-GAN V1, trained jointly on male and female speakers, perform the best. With this setup, we train and evaluate TTS models for 13 languages and find that our models significantly improve upon existing models in all languages, as measured by mean opinion scores. We open-source all models on the Bhashini platform.
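The selected recipe is a standard two-stage pipeline: an acoustic model (FastPitch) maps text to a mel spectrogram, and a neural vocoder (HiFi-GAN V1) maps the spectrogram to a waveform. The sketch below only illustrates the interfaces and tensor shapes with dummy stand-in modules; it is not the released implementation, and the frame and hop sizes are assumptions.

```python
# Structural sketch of the two-stage TTS pipeline (dummy modules, random outputs).
import torch
import torch.nn as nn

class AcousticModel(nn.Module):               # stands in for FastPitch
    def forward(self, phoneme_ids):           # (B, T_text)
        B, T = phoneme_ids.shape
        return torch.randn(B, 80, T * 4)      # (B, n_mels, T_frames) mel spectrogram

class Vocoder(nn.Module):                     # stands in for HiFi-GAN V1
    def forward(self, mel):                   # (B, n_mels, T_frames)
        return torch.randn(mel.shape[0], mel.shape[2] * 256)  # (B, samples) waveform

phonemes = torch.randint(0, 100, (1, 20))     # toy phoneme sequence
audio = Vocoder()(AcousticModel()(phonemes))
print(audio.shape)                            # e.g. torch.Size([1, 20480])
```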
Vision transformers are increasingly being embedded in industrial systems due to their excellent performance, but their memory and power requirements make deploying them on edge devices a difficult task. Model compression techniques are therefore now widely used to deploy models on edge devices, as they reduce resource requirements and make model inference fast and efficient. However, from a security standpoint, the reliability and robustness of these models are another major concern in safety-critical applications. Adversarial attacks act like optical illusions for ML algorithms and can severely affect the accuracy and reliability of a model. In this work, we study the transferability of adversarial samples across three SOTA compressed versions of SOTA vision transformer models and infer the effects that different compression techniques have on adversarial attacks.
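The transferability experiment can be outlined as follows: craft adversarial examples against a full-precision ViT and check whether they still change the prediction of a compressed copy. The sketch uses FGSM and dynamic quantization as stand-ins; the paper's specific attacks and compression techniques may differ, and the random input is a placeholder for a real, normalized image batch.

```python
# Hedged sketch of adversarial transferability from a dense ViT to a quantized copy.
import torch
import timm

source = timm.create_model("vit_base_patch16_224", pretrained=True).eval()
compressed = torch.quantization.quantize_dynamic(       # one stand-in compression scheme
    timm.create_model("vit_base_patch16_224", pretrained=True).eval(),
    {torch.nn.Linear}, dtype=torch.qint8,
)

def fgsm(model, x, y, eps=4 / 255):
    """Single-step FGSM attack crafted against `model`."""
    x = x.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)                  # placeholder image batch
y = source(x).argmax(-1)                        # use the source prediction as the label
x_adv = fgsm(source, x, y)
print("transfers:", (compressed(x_adv).argmax(-1) != y).item())
```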
Physical systems whose dynamics are governed by partial differential equations (PDEs) find applications in many fields, from engineering design to weather forecasting. Obtaining solutions of such PDEs is computationally expensive for large-scale and parameterized problems. In this work, deep learning techniques developed for time-series forecasting, such as LSTMs and TCNs, or for spatial feature extraction, such as CNNs, are used to model the system dynamics of advection-dominated problems. These models take as input a sequence of high-fidelity vector solutions at consecutive time steps obtained from the PDE and predict the solutions at subsequent time steps autoregressively, thereby reducing the computational time and power needed to obtain such high-fidelity solutions. The models are benchmarked numerically (on the 1D Burgers' equation and Stoker's dam-break problem) to assess long-term prediction accuracy, even beyond the training domain (extrapolation). Before being fed to the forecasting models, the high-fidelity snapshots are compressed using non-intrusive reduced-order modeling techniques such as deep autoencoder networks, which reduces the complexity and computation required in both the online and offline stages. Deep ensembles are used for uncertainty quantification of the forecasting models, providing information on the variance of the predictions due to epistemic uncertainty.
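A minimal sketch of the described pipeline, with all dimensions assumed: a deep autoencoder compresses high-fidelity snapshots to a latent space, an LSTM forecasts the next latent state from a window of previous ones, the rollout feeds each prediction back autoregressively, and a deep ensemble of independently initialized forecasters provides a variance estimate of epistemic uncertainty.

```python
# Hedged sketch: autoencoder ROM + autoregressive latent LSTM + deep ensemble.
import torch
import torch.nn as nn

FULL_DIM, LATENT_DIM, WINDOW = 256, 8, 10        # assumed sizes

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(FULL_DIM, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM))
        self.dec = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, FULL_DIM))

class LatentLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(LATENT_DIM, 32, batch_first=True)
        self.head = nn.Linear(32, LATENT_DIM)
    def forward(self, z_seq):                    # (B, WINDOW, LATENT_DIM)
        out, _ = self.lstm(z_seq)
        return self.head(out[:, -1])             # next latent state

def rollout(model, z_window, steps):
    """Autoregressive forecast: feed each prediction back into the input window."""
    preds = []
    for _ in range(steps):
        z_next = model(z_window)
        preds.append(z_next)
        z_window = torch.cat([z_window[:, 1:], z_next.unsqueeze(1)], dim=1)
    return torch.stack(preds, dim=1)

ae = AutoEncoder()
ensemble = [LatentLSTM() for _ in range(5)]      # deep ensemble for uncertainty
snapshots = torch.randn(1, WINDOW, FULL_DIM)     # placeholder high-fidelity solutions
z0 = ae.enc(snapshots)                           # compress to latent space
forecasts = torch.stack([rollout(m, z0, steps=20) for m in ensemble])
mean, var = forecasts.mean(0), forecasts.var(0)  # variance ~ epistemic uncertainty
fields = ae.dec(mean)                            # decode back to full-order solutions
print(fields.shape)
```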
Although representation learning has been central to the rise of machine learning and artificial intelligence, a key problem remains: making the learned representations meaningful. The typical approach is to regularize the learned representations through a prior probability distribution. However, such priors are usually unavailable or ad hoc. To address this, we propose a dynamics-constrained representation learning framework. Instead of using predefined probabilities, we constrain the latent representations to follow specific dynamics, which is a more natural constraint for representation learning in dynamical systems. Our belief stems from a fundamental observation in physics: although different systems can have different marginalized probability distributions, they typically obey the same dynamics, such as Newton's and Schrödinger's equations. We validate our framework on different systems, including a real fluorescent DNA movie dataset. We show that our algorithm can uniquely identify uncorrelated, isometric, and meaningful latent representations.
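A conceptual sketch of the idea: rather than matching the latent distribution to a prior, an autoencoder is trained so that consecutive latent states obey a prescribed dynamics. The damped-oscillator step, dimensions, and equal loss weighting below are assumptions for illustration, not the authors' exact formulation.

```python
# Hedged sketch: autoencoder with a dynamics-consistency penalty on the latent trajectory.
import torch
import torch.nn as nn

class DynAE(nn.Module):
    def __init__(self, obs_dim=32, latent_dim=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, obs_dim))

def dynamics_step(z, dt=0.1, omega=1.0, gamma=0.05):
    """Assumed latent dynamics: damped harmonic oscillator (position, velocity)."""
    pos, vel = z[..., 0], z[..., 1]
    return torch.stack([pos + dt * vel,
                        vel - dt * (omega**2 * pos + gamma * vel)], dim=-1)

def loss_fn(model, x_seq):                    # x_seq: (B, T, obs_dim)
    z = model.enc(x_seq)
    recon = ((model.dec(z) - x_seq) ** 2).mean()
    dyn = ((z[:, 1:] - dynamics_step(z[:, :-1])) ** 2).mean()   # dynamics constraint
    return recon + dyn

model = DynAE()
x_seq = torch.randn(4, 50, 32)                # placeholder observed movie frames
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = loss_fn(model, x_seq)
loss.backward()
opt.step()
print(float(loss))
```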